
    Visual tracking using structural local DCT sparse appearance model with occlusion detection

    In this paper, a structural local DCT sparse appearance model with occlusion detection is proposed for visual tracking in a particle filter framework. The energy compaction property of the 2D-DCT is exploited to reduce the size of the dictionary as well as that of the candidate samples, so that the computational cost of the l1-minimization can be lowered. Further, a holistic image reconstruction procedure is proposed for robust occlusion detection and is used to update the appearance model, thus avoiding degradation of the appearance model in the presence of occlusion or outliers. In addition, a patch occlusion ratio is introduced into the confidence score computation to enhance tracking performance. Quantitative and qualitative performance evaluations on two popular benchmark datasets demonstrate that the proposed tracking algorithm generally outperforms several state-of-the-art methods.
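    The energy-compaction step can be illustrated with a minimal sketch: each local patch is transformed with the 2D-DCT and only a small low-frequency block of coefficients is kept as the feature vector fed to the sparse coder. The patch size, the retained block size, and the SciPy routine are illustrative assumptions, not the authors' implementation.

```python
# Sketch: reduce a patch's dimensionality via 2D-DCT energy compaction
# before sparse coding. Values below are illustrative assumptions.
import numpy as np
from scipy.fft import dctn

def compact_patch(patch, keep=8):
    """Return the top-left keep x keep block of the patch's 2D-DCT."""
    coeffs = dctn(patch, norm='ortho')    # energy concentrates in low frequencies
    return coeffs[:keep, :keep].ravel()   # reduced feature vector

# Example: a 32x32 patch shrinks from 1024 to 64 dimensions,
# which lowers the cost of the subsequent l1-minimization.
patch = np.random.rand(32, 32)
feature = compact_patch(patch)
print(feature.shape)  # (64,)
```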

    Visual Tracking Based on Correlation Filter and Robust Coding in Bilateral 2DPCA Subspace

    The success of correlation filters in visual tracking has attracted much attention in computer vision owing to their high efficiency and performance. However, they are not equipped with a mechanism to cope with challenging situations such as scale variations, out-of-view targets, and camera motion. To deal with such situations, a collaborative tracking scheme based on discriminative and generative models is proposed. Instead of finding all the affine motion parameters of the target from the combined likelihood of these models, correlation filters, which form the discriminative model, are used to find the position of the target, whereas 2D robust coding in a bilateral 2DPCA subspace, which forms the generative model, is used to find the remaining affine motion parameters. Further, a 2D robust coding distance is proposed to differentiate the candidate samples in the subspace and is used to compute the observation likelihood of the generative model. In addition, a robust occlusion map is generated from the weights obtained during the residual minimization, and a novel update mechanism of the appearance model is proposed for both the correlation filters and the bilateral 2DPCA subspace. The proposed method is evaluated on the challenging image sequences in the OTB-50, VOT2016, and UAV20L benchmark datasets, and its performance is compared with that of state-of-the-art tracking algorithms. In contrast to OTB-50 and VOT2016, the UAV20L dataset contains long-duration sequences with additional challenges introduced by camera motion and viewpoints in three dimensions. Quantitative and qualitative performance evaluations on the three benchmark datasets demonstrate that the proposed tracking algorithm outperforms the state-of-the-art methods.
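    The discriminative half of such a scheme, a correlation filter that estimates only the target position, can be sketched as below in the MOSSE style. The generative bilateral-2DPCA model, the 2D robust coding distance, and the occlusion map are not reproduced; variable names and the regularization value are illustrative assumptions.

```python
# Sketch: single-channel correlation filter locating the target position.
import numpy as np

def train_filter(template, gaussian_label, lam=1e-2):
    """Learn the conjugate filter H* in the Fourier domain from one training patch."""
    F = np.fft.fft2(template)
    G = np.fft.fft2(gaussian_label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # regularized closed form

def locate(H_conj, candidate):
    """Return the displacement of the response peak in a search patch."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(candidate)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx, response.max()
```

    In the collaborative scheme described above, the peak location from such a filter would fix the translation, while the remaining affine parameters would come from the generative model.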

    Automatic foot ulcer segmentation using conditional generative adversarial network (AFSegGAN): A wound management system.

    Effective wound care is essential to prevent further complications, promote healing, and reduce the risk of infection and other health issues. Chronic wounds, particularly in older adults, patients with disabilities, and those with pressure, venous, or diabetic foot ulcers, cause significant morbidity and mortality. Given the rising number of individuals with chronic wounds, particularly among the growing elderly and diabetic populations, it is imperative to develop novel technologies and practices for best-practice clinical management of chronic wounds and thus minimize the potential health and economic burdens on society. As wound care is managed in hospitals and community care, quantitative metrics such as the wound boundary and morphological features are crucial. Traditional visual inspection is purely subjective and error-prone, and digitization provides an appealing alternative. Various deep-learning models have earned confidence; however, their accuracy relies primarily on image quality, the size of the dataset used to learn the features, and expert annotation. This work develops a wound management system that automates wound segmentation using a conditional generative adversarial network (cGAN) and estimates wound morphological parameters. AFSegGAN was developed and validated on the MICCAI 2021 foot ulcer segmentation dataset. In addition, an adversarial loss and patch-level comparison at the discriminator network are used to improve segmentation performance and balance GAN training. Our model outperformed state-of-the-art methods with a Dice score of 93.11% and IoU of 99.07%. The proposed wound management system demonstrates its ability in wound segmentation and parameter estimation, thereby reducing healthcare workers' effort to diagnose and manage wounds and facilitating remote healthcare.
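    The patch-level comparison at the discriminator can be sketched in the spirit of a PatchGAN: the discriminator emits a grid of real/fake logits, one per image patch, and the adversarial loss is a binary cross-entropy over that grid. Layer sizes, channel counts, and the conditioning scheme below are illustrative assumptions, not the AFSegGAN architecture.

```python
# Sketch: patch-level discriminator and adversarial loss for mask synthesis.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=4):                     # wound image (3) + mask (1), assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # one logit per receptive-field patch
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

bce = nn.BCEWithLogitsLoss()

def d_loss(disc, image, real_mask, fake_mask):
    """Real (image, ground-truth mask) pairs -> 1, generated pairs -> 0."""
    real_logits = disc(image, real_mask)
    fake_logits = disc(image, fake_mask.detach())
    return bce(real_logits, torch.ones_like(real_logits)) + \
           bce(fake_logits, torch.zeros_like(fake_logits))
```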

    2D-spectral estimation based on DCT and modified magnitude group delay

    This paper proposes two new 2D spectral estimation methods. The 2D modified magnitude group delay (MMGD) is applied to the 2D discrete Fourier transform (2D-DFT) in the first method and to the analytic 2D discrete cosine transform (2D-DCT) in the second. The analytic 2D-DCT preserves the desirable properties of the DCT (such as improved frequency resolution, reduced leakage, and better detectability) and is realized from the 2D-DCT and its Hilbert transform. The 2D-MMGD is an extension of the 1D technique to two dimensions; it reduces the variance while preserving the original frequency resolution of the 2D-DFT or the analytic 2D-DCT, depending on which transform it is applied to. The first and second methods are referred to as DFT-MMGD and DCT-MMGD, respectively. The proposed methods are applied to 2D sinusoids and to a 2D AR process corrupted by white Gaussian noise. The performance of DCT-MMGD is found to be superior to that of DFT-MMGD in terms of variance, frequency resolution, and detectability. Both DFT-MMGD and DCT-MMGD perform better than the 2D-LP method even when the signal-to-noise ratio is low.
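    The evaluation setup can be illustrated with a minimal sketch: two closely spaced 2D sinusoids in white Gaussian noise and the baseline 2D-DFT magnitude spectrum from which they are resolved. The MMGD processing and the analytic 2D-DCT are not reproduced here; the grid size, frequencies, and noise level are illustrative assumptions.

```python
# Sketch: 2D sinusoids in white Gaussian noise and their 2D-DFT periodogram.
import numpy as np

N = 64
n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
signal = (np.cos(2 * np.pi * (0.10 * n1 + 0.20 * n2)) +
          np.cos(2 * np.pi * (0.12 * n1 + 0.22 * n2)))
noisy = signal + 0.5 * np.random.randn(N, N)            # white Gaussian noise

spectrum = np.abs(np.fft.fft2(noisy)) ** 2 / N**2       # baseline 2D periodogram
peak = np.unravel_index(np.argmax(spectrum[:N//2, :N//2]), (N//2, N//2))
print('dominant normalised frequency:', peak[0] / N, peak[1] / N)
```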